
    Anatomy and physiology of word‑selective visual cortex: from visual features to lexical processing

    Published: 12 October 2021

    Over the past two decades, researchers have tried to uncover how the human brain extracts linguistic information from a sequence of visual symbols. The description of how the brain’s visual system processes words and enables reading has improved with the progressive refinement of experimental methodologies and neuroimaging techniques. This review provides a brief overview of this research journey. We start by describing classical models of object recognition in non-human primates, which represent the foundation for many of the early models of visual word recognition in humans. We then review functional neuroimaging studies investigating the word-selective regions in visual cortex. This research led to the differentiation of highly specialized areas involved in the analysis of different aspects of written language. We then consider the corresponding anatomical measurements and describe the main white matter pathways carrying neural signals crucial to word recognition. Finally, in an attempt to integrate structural, functional, and electrophysiological findings, we propose a view of visual word recognition that accounts for the spatial and temporal facets of word-selective neural processes. This multi-modal perspective on the neural circuitry of literacy highlights the relevance of a posterior–anterior differentiation in ventral occipitotemporal cortex for visual processing of written language and lexical features. It also highlights unanswered questions that can guide future research directions. Bridging measures of brain structure and function will help us reach a more precise understanding of the transformation from vision to language.

    This work was supported by the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement no. 837228 and a Rita Levi Montalcini fellowship to SC, NICHD R01-HD095861 and a Jacobs Foundation Research Fellowship to JDY, a Stanford Maternal and Child Health Research Institute award to IK, and the Zuckerman-CHE STEM Leadership Program to MY.

    Diffusional Kurtosis Imaging in the Diffusion Imaging in Python Project.

    Diffusion-weighted magnetic resonance imaging (dMRI) measurements and models provide information about brain connectivity and are sensitive to the physical properties of tissue microstructure. Diffusional Kurtosis Imaging (DKI) quantifies the degree of non-Gaussian diffusion in biological tissue from dMRI. These estimates are of interest because they have been shown to be more sensitive to microstructural alterations in health and disease than measures based on the total anisotropy of diffusion, which are highly confounded by tissue dispersion and fiber crossings. In this work, we implemented DKI in the Diffusion in Python (DIPY) project, a large collaborative open-source project which aims to provide well-tested, well-documented and comprehensive implementations of different dMRI techniques. We demonstrate the functionality of our methods in numerical simulations with known ground-truth parameters and in openly available datasets. A particular strength of our DKI implementation is that it pursues several extensions of the model that connect it explicitly with microstructural models and with the reconstruction of 3D white matter fiber bundles (tractography). For instance, our implementations include DKI-based microstructural models that allow the estimation of biophysical parameters, such as axonal water fraction. Moreover, we illustrate how DKI provides a more general characterization of non-Gaussian diffusion, compatible with complex white matter fiber architectures and gray matter, and we include a novel mean kurtosis index that is invariant to the confounding effects of tissue dispersion. In summary, DKI in DIPY provides a well-tested, well-documented and comprehensive reference implementation for DKI. It provides a platform for wider use of DKI in research on brain disorders and in cognitive neuroscience.
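    The cumulant expansion at the heart of DKI can be illustrated in a few lines. This is a deliberately simplified, noise-free, single-voxel sketch, not DIPY's implementation (the reference implementation lives in DIPY's `dipy.reconst.dki` module): the log-signal is a quadratic in b, so a polynomial fit recovers the diffusivity D and the kurtosis K.

```python
import numpy as np

# Single-voxel DKI cumulant expansion (all values synthetic):
#   ln S(b) = ln S0 - b*D + (1/6) * b^2 * D^2 * K
# A quadratic fit in b recovers D and K.

def fit_dki_voxel(bvals, signal):
    """Fit diffusivity D (mm^2/s) and kurtosis K from signals at several b-values."""
    coeffs = np.polyfit(bvals, np.log(signal), 2)  # [c2, c1, c0]
    D = -coeffs[1]                 # linear term is -D
    K = 6.0 * coeffs[0] / D**2     # quadratic term is D^2 * K / 6
    return D, K

# Simulate a voxel with known ground truth (b in s/mm^2)
bvals = np.array([0, 500, 1000, 1500, 2000, 2500], dtype=float)
D_true, K_true = 1.0e-3, 0.8
signal = np.exp(-bvals * D_true + (bvals**2 * D_true**2 * K_true) / 6.0)

D_hat, K_hat = fit_dki_voxel(bvals, signal)
```

    On noise-free data the fit recovers the ground-truth parameters; real estimation, as in DIPY, must also contend with noise, outliers, and directional dependence.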

    Evaluating the accuracy of diffusion MRI models in white matter

    Models of diffusion MRI within a voxel are useful for making inferences about the properties of the tissue and for inferring the fiber orientation distributions used by tractography algorithms. A useful model must fit the data accurately. However, evaluations of model accuracy for some of the models commonly used in analyzing human white matter have not been published before. Here, we evaluate the model accuracy of the two main classes of diffusion MRI models. The diffusion tensor model (DTM) summarizes diffusion as a 3-dimensional Gaussian distribution. Sparse fascicle models (SFM) summarize the signal as a linear sum of signals originating from a collection of fascicles oriented in different directions. We use cross-validation to assess model accuracy at different gradient amplitudes (b-values) throughout the white matter. Specifically, we fit each model to all the white matter voxels in one data set and then use the model to predict a second, independent data set. This is the first evaluation of the model accuracy of these models. In most of the white matter the DTM predicts the data more accurately than test-retest reliability; SFM model accuracy is higher than test-retest reliability and also higher than that of the DTM, particularly (a) for measurements with a b-value above 1000 in locations containing fiber crossings, and (b) in the regions of the brain surrounding the optic radiations. The SFM also has better parameter validity: it more accurately estimates the fiber orientation distribution function (fODF) in each voxel, which is useful for fiber tracking.
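    The cross-validation scheme described above (fit a model to one measurement, then score its prediction of an independent repeat against test-retest error) can be sketched for the DTM on simulated single-voxel data. All values here are synthetic, and the log-linear tensor fit is a standard simplification, not the paper's full pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def design(bvecs, bval):
    """Design matrix for the log-linear tensor fit: ln(S/S0) = X @ d."""
    gx, gy, gz = bvecs.T
    return -bval * np.column_stack(
        [gx * gx, gy * gy, gz * gz, 2 * gx * gy, 2 * gx * gz, 2 * gy * gz])

def fit_dtm(bvecs, bval, signal):
    """Least-squares fit of the 6 unique tensor elements."""
    d, *_ = np.linalg.lstsq(design(bvecs, bval), np.log(signal), rcond=None)
    return d

def predict(bvecs, bval, d):
    return np.exp(design(bvecs, bval) @ d)

# Two noisy repeats of the same voxel (prolate tensor, b in s/mm^2)
bval = 1000.0
bvecs = rng.normal(size=(60, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
clean = np.exp(-bval * np.einsum('ij,jk,ik->i', bvecs, D, bvecs))
rep1 = clean + rng.normal(scale=0.01, size=60)
rep2 = clean + rng.normal(scale=0.01, size=60)

# Fit on repeat 1, predict repeat 2; compare to test-retest error
d_hat = fit_dtm(bvecs, bval, rep1)
model_err = np.sqrt(np.mean((predict(bvecs, bval, d_hat) - rep2) ** 2))
retest_err = np.sqrt(np.mean((rep1 - rep2) ** 2))
```

    Because the 6-parameter fit averages out part of the measurement noise, the model prediction error is below the raw test-retest error in this Gaussian-diffusion simulation; the paper's point is that this comparison becomes less favorable where the tensor model is a poor description, such as fiber crossings.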

    Combining Citizen Science and Deep Learning to Amplify Expertise in Neuroimaging

    Big Data promises to advance science through data-driven discovery. However, many standard lab protocols rely on manual examination, which is not feasible for large-scale datasets. Meanwhile, automated approaches lack the accuracy of expert examination. We propose to (1) start with expertly labeled data, (2) amplify labels through web applications that engage citizen scientists, and (3) train machine learning models on the amplified labels to emulate the experts. Demonstrating this, we developed a system to quality control brain magnetic resonance images. Expert-labeled data were amplified by citizen scientists through a simple web interface. A deep learning algorithm was then trained to predict data quality based on the citizen scientist labels. Deep learning performed as well as specialized algorithms for quality control (AUC = 0.99). Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in disciplines where specialized, automated tools do not yet exist.
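    The label-amplification step can be illustrated with a toy simulation (all numbers hypothetical): many noisy citizen-scientist ratings per image are averaged into soft labels, which closely track the expert ground truth and could then serve as training targets for a classifier.

```python
import numpy as np

rng = np.random.default_rng(42)

n_images, n_raters = 200, 15
true_quality = rng.integers(0, 2, size=n_images)      # expert "pass"/"fail" labels
flip = rng.random((n_images, n_raters)) < 0.25        # each citizen errs 25% of the time
ratings = np.where(flip, 1 - true_quality[:, None], true_quality[:, None])

soft_labels = ratings.mean(axis=1)                    # amplified soft label in [0, 1]
hard_labels = (soft_labels > 0.5).astype(int)         # majority vote
agreement = (hard_labels == true_quality).mean()      # agreement with the experts
```

    With enough raters per image, majority voting over individually unreliable ratings reproduces the expert decision almost perfectly, which is what makes the amplified labels usable as training data.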

    Development of the visual white matter pathways mediates development of electrophysiological responses in visual cortex

    First published: 06 September 2021

    The latency of neural responses in the visual cortex changes systematically across the lifespan. Here, we test the hypothesis that development of visual white matter pathways mediates maturational changes in the latency of visual signals. Thirty-eight children participated in a cross-sectional study including diffusion magnetic resonance imaging (MRI) and magnetoencephalography (MEG) sessions. During the MEG acquisition, participants performed a lexical decision and a fixation task on words presented at varying levels of contrast and noise. For all stimuli and tasks, early evoked fields were observed around 100 ms after stimulus onset (M100), with slower and lower-amplitude responses for low- as compared to high-contrast stimuli. The optic radiations and optic tracts were identified in each individual's brain based on diffusion MRI tractography. The diffusion properties of the optic radiations predicted M100 responses, especially for high-contrast stimuli. Higher optic radiation fractional anisotropy (FA) values were associated with faster and larger M100 responses. Over this developmental window, the M100 responses to high-contrast stimuli became faster with age, and the optic radiation FA mediated this effect. These findings suggest that the maturation of the optic radiations over childhood accounts for individual variations observed in the developmental trajectory of visual cortex responses.

    H2020 Marie Skłodowska-Curie Actions, Grant/Award Number: 837228; Italian Ministry of University and Research (MIUR): Programma Rita Levi Montalcini; Jacobs Foundation Research Fellowship, Grant/Award Number: RF1MH121868-01; National Institute of Child Health and Human Development, Grant/Award Numbers: R01HD09586101, R21HD092771; National Research Foundation of Korea, Grant/Award Number: NRF-2019R1C1C1009383; NSF/BSF, Grant/Award Number: BCS #155133
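    A minimal sketch of the mediation logic, on simulated data rather than the study's: the indirect (mediated) effect of age on M100 latency is the product of the age-to-FA and FA-to-latency path coefficients, and in ordinary least squares the total effect decomposes exactly into direct plus indirect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(7, 13, n)                             # years (made-up range)
fa = 0.45 + 0.02 * age + rng.normal(0, 0.02, n)         # path a: age -> FA
latency = 180 - 100 * fa + rng.normal(0, 3, n)          # path b: FA -> latency (ms)

def slope(x, y):
    """Ordinary least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a_hat = slope(age, fa)               # age -> mediator (FA)
total = slope(age, latency)          # total effect of age on latency
# Direct effect and path b from a joint regression of latency on [age, FA]
X = np.column_stack([np.ones(n), age, fa])
_, direct, b_hat = np.linalg.lstsq(X, latency, rcond=None)[0]
indirect = a_hat * b_hat             # mediated (indirect) effect
```

    The classic algebraic identity total = direct + a*b holds exactly for OLS estimates on the same sample, which is why the product-of-coefficients and difference-of-coefficients definitions of the indirect effect agree here.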

    Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency

    Developing an educational test can be expensive and time-consuming, as each item must be written by experts and then evaluated by collecting hundreds of student responses. Moreover, many tests require multiple distinct sets of questions, known as parallel tests, administered throughout the school year to closely monitor students' progress. In this study, we focus on tests of silent sentence reading efficiency, used to assess students' reading ability over time. To generate high-quality parallel tests, we propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items. With these simulated responses, we can estimate each item's difficulty and ambiguity. We first use GPT-4 to generate new test items following a list of expert-developed rules and then apply a fine-tuned LLM to filter the items based on criteria from psychological measurement. We also propose an optimal-transport-inspired technique for generating parallel tests and show that the generated tests closely correspond to the original test's difficulty and reliability, based on crowdworker responses. Our evaluation of a generated test with 234 students from grades 2 to 8 produces test scores highly correlated (r = 0.93) with those of a standard test form written by human experts and evaluated across thousands of K-12 students.

    Comment: Accepted to EMNLP 2023 (Main)
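    The difficulty-estimation step can be sketched with a Rasch-style simulation (synthetic students and items, not the paper's fine-tuned-LLM simulator): item difficulty is recovered from simulated responses as the negative logit of each item's proportion correct.

```python
import numpy as np

rng = np.random.default_rng(7)

n_students, n_items = 1000, 20
ability = rng.normal(0, 1, n_students)                  # latent student ability
difficulty = np.linspace(-2, 2, n_items)                # ground-truth item difficulty

# Rasch model: P(correct) = sigmoid(ability - difficulty)
p = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((n_students, n_items)) < p).astype(int)

# Recover difficulty from the simulated response matrix
p_correct = responses.mean(axis=0)
difficulty_hat = -np.log(p_correct / (1 - p_correct))   # negative logit
```

    The marginal logit is compressed relative to the true Rasch difficulties (ability variance shrinks it toward zero), but it preserves their ordering, which is what matters for filtering items and matching parallel forms by difficulty.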

    Evaluating the Reliability of Human Brain White Matter Tractometry

    Published: Nov 17, 2021

    The validity of research results depends on the reliability of analysis methods. In recent years, there have been concerns about the validity of research that uses diffusion-weighted MRI (dMRI) to understand human brain white matter connections in vivo, in part based on the reliability of analysis methods used in this field. We defined and assessed three dimensions of reliability in dMRI-based tractometry, an analysis technique that assesses the physical properties of white matter pathways: (1) reproducibility, (2) test-retest reliability, and (3) robustness. To facilitate reproducibility, we provide software that automates tractometry (https://yeatmanlab.github.io/pyAFQ). In measurements from the Human Connectome Project, as well as clinical-grade measurements, we find that tractometry has high test-retest reliability that is comparable to most standardized clinical assessment tools. We find that tractometry is also robust, showing high reliability with different choices of analysis algorithms. Taken together, our results suggest that tractometry is a reliable approach to the analysis of white matter connections. The overall approach taken here both demonstrates the specific trustworthiness of tractometry analysis and outlines what researchers can do to establish the reliability of computational analysis pipelines in neuroimaging.

    This work was supported through grant 1RF1MH121868-01 from the National Institute of Mental Health/the BRAIN Initiative, through grant 5R01EB027585-02 to Eleftherios Garyfallidis (Indiana University) from the National Institute of Biomedical Imaging and Bioengineering, through Azure Cloud Computing Credits for Research & Teaching provided through the University of Washington’s Research Computing unit and the University of Washington eScience Institute, and through NICHD R21HD092771 to Jason D. Yeatman.
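    Test-retest reliability of a tract summary measure is typically scored with an intraclass correlation. Below is a self-contained sketch on simulated per-subject mean FA values from two scan sessions; the data and noise levels are invented, and ICC(2,1) follows the standard Shrout-Fleiss mean-squares formulas.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects = 40
true_fa = rng.normal(0.5, 0.05, n_subjects)            # stable subject trait
sess1 = true_fa + rng.normal(0, 0.01, n_subjects)      # session 1 measurement
sess2 = true_fa + rng.normal(0, 0.01, n_subjects)      # session 2 measurement

def icc_2_1(x, y):
    """ICC(2,1), two-way random effects, absolute agreement, single measure."""
    data = np.column_stack([x, y])
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.var(data.mean(axis=1), ddof=1)    # between-subject mean square
    ms_cols = n * np.var(data.mean(axis=0), ddof=1)    # between-session mean square
    resid = data - data.mean(axis=1, keepdims=True) - data.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))  # residual mean square
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

reliability = icc_2_1(sess1, sess2)
```

    With session noise much smaller than between-subject variation, the ICC lands near 1, which is the regime the paper reports for tractometry on both research-grade and clinical-grade data.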

    Rapid online assessment of reading ability

    Published: 18 March 2021

    An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.

    We would like to thank the Pavlovia and PsychoPy team for their support on the browser-based experiments. This work was funded by NIH NICHD R01HD09586101, research grants from Microsoft, and a Jacobs Foundation Research Fellowship to J.D.Y.
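    The disattenuated correlation reported above follows Spearman's classic correction, r_true = r_observed / sqrt(r_xx * r_yy). In this sketch the observed correlation (0.91) and the LDT reliability (0.97) come from the abstract, while the reference test's reliability is a placeholder assumption chosen only to make the arithmetic concrete.

```python
import math

def disattenuate(r_xy, r_xx, r_yy):
    """Correct an observed correlation for the unreliability of both measures."""
    return r_xy / math.sqrt(r_xx * r_yy)

r_observed = 0.91   # LDT vs Woodcock-Johnson (from the abstract)
r_ldt = 0.97        # LDT reliability (from the abstract)
r_wj = 0.97         # assumed reliability of the reference test (placeholder)

r_true = disattenuate(r_observed, r_ldt, r_wj)
```

    Under this placeholder assumption the corrected value comes out near the 0.94 reported in the abstract; the correction's point is that the observed 0.91 understates the true association because neither measure is perfectly reliable.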

    Speed accuracy tradeoff? Not so fast: Marginal changes in speed have inconsistent relationships with accuracy in real-world settings

    The speed-accuracy tradeoff suggests that responses generated under time constraints will be less accurate. While it has undergone extensive experimental verification, it is less clear whether it applies in settings where time pressures are not experimentally manipulated (but where respondents still vary in their utilization of time). Using a large corpus of 29 response time datasets from cognitive tasks without experimental manipulation of time pressure, we probe whether the speed-accuracy tradeoff holds across a variety of tasks using idiosyncratic within-person variation in speed. We find inconsistent relationships between marginal increases in time spent responding and accuracy; in many cases, marginal increases in time do not predict increases in accuracy. However, we do observe that time pressures (in the form of time limits) consistently reduce accuracy and that rapid responses typically show the anticipated relationship (i.e., they are more accurate if they are slower). We also consider analyses of items and individuals. We find substantial variation in item-level associations between speed and accuracy. On the person side, respondents who exhibit more within-person variation in response speed are typically of lower ability. Finally, we consider the predictive power of a person's response time for out-of-sample responses; it is generally a weak predictor. Collectively, our findings suggest that the speed-accuracy tradeoff may be limited as a conceptual model in its application to non-experimental settings and, more generally, offer empirical results and an analytic approach that will be useful as more response time data are collected.
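    The within-person analysis can be sketched as follows (one simulated respondent, all parameters invented): center each person's response times and correlate them with trial accuracy; a positive sign means marginally slower responses were more accurate, the pattern the classic tradeoff predicts.

```python
import numpy as np

rng = np.random.default_rng(5)

def within_person_rt_accuracy(rt, correct):
    """Point-biserial correlation of person-centered response time with accuracy."""
    rt_c = rt - rt.mean()                       # within-person centering
    if correct.std() == 0 or rt_c.std() == 0:   # degenerate respondent
        return np.nan
    return np.corrcoef(rt_c, correct)[0, 1]

# Simulate one respondent whose slower responses are more accurate
n_trials = 300
rt = rng.lognormal(mean=0.0, sigma=0.3, size=n_trials)   # response times (s)
p_correct = 1 / (1 + np.exp(-2 * (rt - 1.0)))            # accuracy rises with RT
correct = (rng.random(n_trials) < p_correct).astype(float)

r = within_person_rt_accuracy(rt, correct)
```

    Run per respondent and per item across a corpus, the sign and magnitude of this statistic are exactly what the study finds to be inconsistent in real-world data.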

    Author Correction: An analysis-ready and quality controlled resource for pediatric brain white-matter research
